transposition table
Formal Verification of Minimax Algorithms
Wesselink, Wieger, Huizing, Kees, van de Wetering, Huub
Using the Dafny verification system, we formally verify a range of minimax search algorithms, including variations with alpha-beta pruning and transposition tables. For depth-limited search with transposition tables, we introduce a witness-based correctness criterion and apply it to two representative algorithms. All verification artifacts, including proofs and Python implementations, are publicly available.
On some improvements to Unbounded Minimax
Cohen-Solal, Quentin, Cazenave, Tristan
This paper presents the first experimental evaluation of four previously untested modifications of Unbounded Best-First Minimax algorithm. This algorithm explores the game tree by iteratively expanding the most promising sequences of actions based on the current partial game tree. We first evaluate the use of transposition tables, which convert the game tree into a directed acyclic graph by merging duplicate states. Second, we compare the original algorithm by Korf & Chickering with the variant proposed by Cohen-Solal, which differs in its backpropagation strategy: instead of stopping when a stable value is encountered, it updates values up to the root. This change slightly improves performance when value ties or transposition tables are involved. Third, we assess replacing the exact terminal evaluation function with the learned heuristic function. While beneficial when exact evaluations are costly, this modification reduces performance in inexpensive settings. Finally, we examine the impact of the completion technique that prioritizes resolved winning states and avoids resolved losing states. This technique also improves performance. Overall, our findings highlight how targeted modifications can enhance the efficiency of Unbounded Best-First Minimax.
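To make the transposition-table idea concrete, here is a minimal sketch (not the paper's implementation) of exhaustive minimax with a transposition table: a repeated state is looked up instead of re-searched, which merges the game tree into a directed acyclic graph. The toy game below is invented for illustration; `CHILDREN` maps each state to its successors and `VALUES` gives exact leaf values. Caching exact values this way is sound for exhaustive search; depth-limited search additionally has to store search depth or bounds with each entry.

```python
# Toy game, made up for illustration: "shared" is reachable from both
# "a" and "b", i.e. it is a transposition.
CHILDREN = {
    "root": ["a", "b"],
    "a": ["shared", "x"],
    "b": ["shared", "y"],
    "shared": [], "x": [], "y": [],
}
VALUES = {"shared": 0, "x": -1, "y": 2}  # exact values of terminal states

def minimax_tt(state, maximizing, table):
    if state in table:
        return table[state]          # duplicate state: reuse the stored result
    if not CHILDREN[state]:          # terminal leaf
        v = VALUES[state]
    else:
        child_vals = [minimax_tt(c, not maximizing, table) for c in CHILDREN[state]]
        v = max(child_vals) if maximizing else min(child_vals)
    table[state] = v                 # memoize before returning
    return v

table = {}
best = minimax_tt("root", True, table)   # root is the maximizing player
```

Here "shared" is evaluated once and its cached value is reused when reached through the second parent.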
Improving Minimax performance
The Minimax algorithm, also known as MinMax, is a popular algorithm for calculating the best possible move a player can make in a zero-sum game, like Tic-Tac-Toe or Chess. It uses an evaluation function provided by the developer to analyze a given game board. During execution, Minimax builds a game tree that can become quite large, leading to very long runtimes. In this article I'd like to introduce 10 methods to improve the performance of the Minimax algorithm and optimize its runtime.
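As a baseline for the article's optimizations, here is a bare-bones depth-limited Minimax sketch. The toy representation is assumed for illustration only: an internal node is a list of children and a leaf is its own score, standing in for the developer-provided evaluation function the article describes.

```python
def legal_moves(board):
    # In this toy encoding a "move" just selects a child subtree.
    return board if isinstance(board, list) else []

def minimax(board, depth, maximizing):
    moves = legal_moves(board)
    if depth == 0 or not moves:
        return board                 # leaf score = the evaluation function here
    if maximizing:
        return max(minimax(m, depth - 1, False) for m in moves)
    else:
        return min(minimax(m, depth - 1, True) for m in moves)

# Two-ply toy tree: maximizer picks a branch, minimizer picks a leaf.
tree = [[3, 5], [2, 9]]
best = minimax(tree, 2, True)   # minimizer answers 3 and 2; maximizer takes 3
```

Every enhancement discussed below (pruning, move ordering, caching) starts from this exhaustive recursion and tries to avoid visiting parts of the tree.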
Towards solving the 7-in-a-row game
Czifra, Domonkos, Csóka, Endre, Zombori, Zsolt, Makay, Géza
Our paper explores the game theoretic value of the 7-in-a-row game. We reduce the problem to solving a finite board game, which we target using Proof Number Search. We present a number of heuristic improvements to Proof Number Search and examine their effect within the context of this particular game. Although our paper does not solve the 7-in-a-row game, our experiments indicate that we have made significant progress towards it.
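The core update rule of Proof Number Search can be stated in a few lines (a generic textbook sketch, not the paper's implementation): at an OR node the proof number is the minimum over the children and the disproof number is the sum, while at an AND node the roles swap.

```python
def pns_update(node_type, children):
    """children: list of (proof_number, disproof_number) pairs.

    OR node: the player to move needs only one child proved, but the
    opponent must disprove all of them. AND node: the reverse.
    """
    proofs = [p for p, d in children]
    disproofs = [d for p, d in children]
    if node_type == "OR":
        return min(proofs), sum(disproofs)
    else:  # "AND"
        return sum(proofs), min(disproofs)
```

The search repeatedly expands a most-proving node and propagates these numbers back to the root; the heuristic improvements in the paper concern which node to expand and how to initialize the numbers.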
Monte Carlo Game Solver
We present a general algorithm to order moves so as to speed up exact game solvers. It uses online learning of playout policies and Monte Carlo Tree Search. The learned policy and the information in the Monte Carlo tree are used to order moves in game solvers, greatly improving the solving time for multiple games.
The αμ Search Algorithm for the Game of Bridge
Cazenave, Tristan, Ventos, Véronique
αμ is an anytime heuristic search algorithm for incomplete information games that assumes perfect information for the opponents. αμ addresses the strategy fusion and non-locality problems encountered by Perfect Information Monte Carlo sampling. In this paper αμ is applied to the game of Bridge.
A different take on the best-first game tree pruning algorithms
The alpha-beta pruning algorithms have been popular in game tree searching ever since they were discovered. Numerous enhancements have been proposed in the literature, and it is often overwhelming to decide which would be best to implement. A given enhancement can take far too long to fine-tune its hyperparameters, or to determine whether it will make much of a difference under memory limitations. On the other hand are the best-first pruning techniques, mostly counterparts of the famous SSS* algorithm, which proved disruptive at the time of its discovery but gradually became an outcast for being too memory intensive and having a higher time complexity. Later research does not see the best-first approaches as completely different from the depth-first enhancements; rather, the two are transitionary, in the sense that a best-first approach can be viewed as a depth-first approach with a certain set of enhancements, and with the growing power of computers, SSS* no longer seems as taxing on memory either. Even so, there remains considerable difficulty in understanding the nature of the SSS* algorithm and why it does what it does, and it is often termed too complex to fathom, visualize, and understand on an intellectual level. This article tries to bridge this gap and provides experimental results comparing the two families with their most promising advances.
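For reference, the depth-first baseline the enhancements build on is plain fail-hard alpha-beta. This is a standard textbook sketch, included only to fix notation; the toy tree below is illustrative, with internal nodes as lists of children and leaves as exact values.

```python
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):       # leaf: exact value
        return node
    if maximizing:
        v = alpha
        for child in node:
            v = max(v, alphabeta(child, v, beta, False))
            if v >= beta:
                return v                 # beta cutoff: opponent avoids this line
        return v
    else:
        v = beta
        for child in node:
            v = min(v, alphabeta(child, alpha, v, True))
            if v <= alpha:
                return v                 # alpha cutoff
        return v

# After the first branch yields 3, the leaf 9 in the second branch is pruned:
# the minimizer already has 2 <= alpha = 3 and cuts off.
score = alphabeta([[3, 5], [2, 9]], float("-inf"), float("inf"), True)
```

SSS* and its best-first relatives reach the same minimax value but traverse the tree in a different order, keeping an explicit frontier of candidate solutions in memory instead of a single recursion stack.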
In 1983, This Bell Labs Computer Was the First Machine to Become a Chess Master
Chess is a complicated game. It's a game of strategy between two opponents, but with no hidden information and all of the potential moves known by both players at the outset. With each turn, players communicate their intent and try to anticipate the possible countermoves. The ability to envision several moves in advance is a recipe for victory, and one that mathematicians and logicians have long found intriguing. Despite some early mechanical chess-playing machines--and at least one chess-playing hoax--mechanized chess play remained hypothetical until the advent of digital computing.
Dynamic Move Tables and Long Branches with Backtracking in Computer Chess
The idea of dynamic move chains has been described in a preceding paper [10]. Re-using an earlier piece of search allows the tree to be forward-pruned, which is known to be dangerous because it can remove new information that would only be revealed by a more exhaustive search. The justification is the integrity of the position: small changes between positions make it more likely that an earlier result still applies. Larger problems, where exhaustive search is not possible, would also benefit from a method that can guess accurately. This paper extends the forward-pruning technique with 'move tables' that act in the same way as Transposition Tables, but for moves rather than positions. They use an efficient memory structure, and the design is put into the context of short- and long-term memories; the long-term memory includes simple rote learning of other players' games. The forward-pruning technique can also be fortified to remove some potential errors. Another idea is 'long branches': a short move sequence is played before returning to a full search at the resulting leaf nodes. With some configuration, the dynamic tables can therefore be used reliably and relatively independently of the position. This advances some of the future-work theory of the earlier paper and makes more explicit where logical plans and more knowledge-based approaches might be applied. The author would argue that the process is a very human approach to searching for chess moves.
Generalized Rapid Action Value Estimation
Cazenave, Tristan (Université Paris-Dauphine)
Monte Carlo Tree Search (MCTS) is the state of the art algorithm for many games including the game of Go and General Game Playing (GGP). The standard algorithm for MCTS is Upper Confidence bounds applied to Trees (UCT). For games such as Go a big improvement over UCT is the Rapid Action Value Estimation (RAVE) heuristic. We propose to generalize the RAVE heuristic so as to have more accurate estimates near the leaves. We test the resulting algorithm named GRAVE for Atarigo, Knighthrough, Domineering and Go.
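The RAVE blend at the heart of this line of work can be sketched as follows (assumed from the standard Gelly-Silver formulation, not copied from the paper): a move's value mixes the node's own mean with the AMAF (all-moves-as-first) mean, with a weight beta that shrinks as the node accumulates its own playouts. GRAVE's change is to take the AMAF statistics from the closest ancestor with enough playouts, which is what improves the estimates near the leaves.

```python
def rave_value(mean, visits, amaf_mean, amaf_visits, bias=1e-5):
    """Blend a node's own mean with its AMAF mean.

    With few visits beta is close to 1 (trust AMAF); as visits grow,
    beta tends to 0 and the node's own statistics dominate.
    """
    beta = amaf_visits / (visits + amaf_visits + bias * visits * amaf_visits)
    return (1.0 - beta) * mean + beta * amaf_mean
```

With zero visits the estimate is pure AMAF; after a million visits it is essentially the node's own mean. In GRAVE, `amaf_mean` and `amaf_visits` would come from an ancestor node whose playout count exceeds a threshold, rather than from the leaf itself.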